Algorithmic Fairness from a Non-ideal Perspective
Inspired by recent breakthroughs in predictive modeling, practitioners in both industry and government have turned to machine learning with hopes of operationalizing predictions to drive automated decisions. Unfortunately, many social desiderata concerning consequential decisions, such as justice or fairness, have no natural formulation within a purely predictive framework. In efforts to mitigate these problems, researchers have proposed a variety of metrics for quantifying deviations from various statistical parities that we might expect to observe in a fair world and offered a variety of algorithms in attempts to satisfy subsets of these parities or to trade off the degree to which they are satisfied against utility. In this paper, we connect this approach to fair machine learning to the literature on ideal and non-ideal methodological approaches in political philosophy. The ideal approach requires positing the principles according to which a just world would operate. In the most straightforward application of ideal theory, one supports a proposed policy by arguing that it closes a discrepancy between the real and the perfectly just world. However, by failing to account for the mechanisms by which our non-ideal world arose, the responsibilities of various decision-makers, and the impacts of proposed policies, naive applications of ideal thinking can lead to misguided interventions. In this paper, we demonstrate a connection between the fair machine learning literature and the ideal approach in political philosophy, and argue that the increasingly apparent shortcomings of proposed fair machine learning algorithms reflect broader troubles faced by the ideal approach. We conclude with a critical discussion of the harms of misguided solutions, a reinterpretation of impossibility results, and directions for future research.
On the Actionability of Outcome Prediction
Predicting future outcomes is a prevalent application of machine learning in
social impact domains. Examples range from predicting student success in
education to predicting disease risk in healthcare. Practitioners recognize
that the ultimate goal is not just to predict but to act effectively.
Increasing evidence suggests that relying on outcome predictions for downstream
interventions may not have the desired results.
In most domains there exists a multitude of possible interventions for each
individual, making the challenge of taking effective action more acute. Even
when the causal mechanisms connecting the individual's latent states to outcomes are
well understood, in any given instance (a specific student or patient),
practitioners still need to infer -- from budgeted measurements of latent
states -- which of many possible interventions will be most effective for this
individual. With this in mind, we ask: when are accurate predictors of outcomes
helpful for identifying the most suitable intervention?
Through a simple model encompassing actions, latent states, and measurements,
we demonstrate that pure outcome prediction rarely results in the most
effective policy for taking actions, even when combined with other
measurements. We find that except in cases where there is a single decisive
action for improving the outcome, outcome prediction never maximizes "action
value", the utility of taking actions. Making measurements of actionable latent
states, where specific actions lead to desired outcomes, considerably enhances
the action value compared to outcome prediction, and the degree of improvement
depends on action costs and the outcome model. This analysis emphasizes the
need to go beyond generic outcome prediction in interventional settings by
incorporating knowledge of plausible actions and latent states.

Comment: 14 pages, 3 figures
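The gap between outcome prediction and action value can be illustrated with a toy model (everything below is a hypothetical sketch invented for illustration, not the paper's actual setup): two binary latent states, an outcome that is good only when both states hold, and a budget of one repair action per individual.

```python
import random

random.seed(0)

def simulate(policy, n=10000):
    """Average post-intervention outcome under a given action policy."""
    good = 0
    for _ in range(n):
        # Two binary latent states; the outcome is good only if both hold.
        s = [random.random() < 0.5, random.random() < 0.5]
        a = policy(s)
        if a is not None:
            s[a] = True  # the chosen action repairs that latent state
        good += int(s[0] and s[1])
    return good / n

def outcome_policy(s):
    """Sees only the (perfectly accurate) outcome prediction: when the
    outcome looks bad, it must guess which latent state to act on."""
    if s[0] and s[1]:
        return None
    return random.choice([0, 1])

def latent_policy(s):
    """Measures the actionable latent states and targets a deficient one."""
    for i, ok in enumerate(s):
        if not ok:
            return i
    return None

print(simulate(outcome_policy))  # ~0.50: half the single-deficit cases are mis-targeted
print(simulate(latent_policy))   # ~0.75: every single-deficit case gets repaired
```

Even with a perfect outcome predictor, the first policy wastes actions whenever exactly one state is deficient, which is the intuition behind measuring actionable latent states rather than outcomes alone.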
From Fair Decision Making to Social Equality
The study of fairness in intelligent decision systems has mostly ignored
long-term influence on the underlying population. Yet fairness considerations
(e.g., affirmative action) often have the implicit goal of achieving balance
among groups within the population. The most basic notion of balance is
eventual equality between the qualifications of the groups. How can we
incorporate influence dynamics in decision making? How well do
dynamics-oblivious fairness policies fare in terms of reaching equality? In
this paper, we propose a simple yet revealing model that encompasses (1) a
selection process where an institution chooses from multiple groups according
to their qualifications so as to maximize an institutional utility and (2)
dynamics that govern the evolution of the groups' qualifications according to
the imposed policies. We focus on demographic parity as the formalism of
affirmative action.
We then give conditions under which an unconstrained policy reaches equality
on its own. In this case, surprisingly, imposing demographic parity may break
equality. When it doesn't, one would expect the additional constraint to reduce
utility; however, we show that utility may in fact increase. In more realistic
scenarios, unconstrained policies do not lead to equality. In such cases, we
show that although imposing demographic parity may remedy it, there is a danger
that groups settle at a worse set of qualifications. As a silver lining, we
also identify when the constraint not only leads to equality, but also improves
all groups. This gives quantifiable insight into both sides of the mismatch
hypothesis. These cases and trade-offs are instrumental in determining when and
how imposing demographic parity can be beneficial in selection processes, both
for the institution and for society in the long run.

Comment: A short version appears in the proceedings of ACM FAT* 201
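The interplay between a selection policy and qualification dynamics can be sketched in a few lines (a toy dynamic invented for illustration; the function, parameters, and update rule are not the paper's model): each group's mean qualification drifts toward its selection rate, and the institution either selects in proportion to qualification or under demographic parity.

```python
def evolve(q_a, q_b, parity, steps=200, eta=0.1, capacity=0.8):
    """Toy two-group qualification dynamics (illustrative only)."""
    for _ in range(steps):
        if parity:
            # Demographic parity: both groups receive the same selection rate.
            r_a = r_b = capacity / 2
        else:
            # Unconstrained: selection in proportion to current qualification.
            total = q_a + q_b
            r_a = capacity * q_a / total
            r_b = capacity * q_b / total
        # Being selected pulls a group's qualification toward its rate.
        q_a += eta * (r_a - q_a)
        q_b += eta * (r_b - q_b)
    return q_a, q_b

print(evolve(0.6, 0.2, parity=False))  # gap persists: ~(0.6, 0.2)
print(evolve(0.6, 0.2, parity=True))   # equality reached: ~(0.4, 0.4)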